Flexible multi-task learning with latent task grouping
Authors
Abstract
In multi-task learning, using task grouping structure has been shown to be effective in preventing inappropriate knowledge transfer among unrelated tasks. However, the group structure often has to be predetermined using prior knowledge or heuristics, which offers no theoretical guarantee and can lead to unsatisfactory learning performance. In this paper, we present a flexible multi-task learning...

Similar Resources
Learning Task Grouping and Overlap in Multi-task Learning
In the paradigm of multi-task learning, multiple related prediction tasks are learned jointly, sharing information across the tasks. We propose a framework for multi-task learning that enables one to selectively share the information across the tasks. We assume that each task parameter vector is a linear combination of a finite number of underlying basis tasks. The coefficients of the linear co...
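The basis-task model described in this abstract can be sketched in a few lines of NumPy. This is only an illustrative toy, not the paper's implementation; the names `L`, `S`, `d`, `k`, and `T` are assumptions chosen here for clarity.

```python
import numpy as np

# Toy sketch of the basis-task idea: each task's parameter vector
# w_t is a linear combination of k shared basis tasks (the columns
# of L), i.e. w_t = L @ s_t, where s_t holds that task's mixing
# coefficients. Dimensions and names are illustrative only.

rng = np.random.default_rng(0)

d, k, T = 5, 2, 4                  # feature dim, basis tasks, tasks
L = rng.standard_normal((d, k))    # shared basis tasks (columns)
S = rng.standard_normal((k, T))    # per-task coefficient vectors

W = L @ S                          # column t = parameter vector of task t

# Tasks whose coefficient columns overlap share basis tasks and hence
# transfer information; a zero coefficient switches a basis task off.
assert W.shape == (d, T)
assert np.allclose(W[:, 0], L @ S[:, 0])
```

Selectivity in sharing comes from the coefficients: making `S` sparse (as the paper's framework does via regularization) lets each task draw on only a subset of the basis tasks.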
Flexible Modeling of Latent Task Structures in Multitask Learning
Multitask learning algorithms are typically designed assuming some fixed, a priori known latent structure shared by all the tasks. However, it is usually unclear what type of latent task structure is the most appropriate for a given multitask learning problem. Ideally, the “right” latent task structure should be learned in a data-driven manner. We present a flexible, nonparametric Bayesian mode...
Transferring Multi-device Localization Models using Latent Multi-task Learning
In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the collected signal data distributions are fixed, and thus the localization model learned on one device can be used on other devices without adaptation. However, by empirically studying the signal variation over differ...
Infinite Latent SVM for Classification and Multi-task Learning
Unlike existing nonparametric Bayesian models, which rely solely on specially conceived priors to incorporate domain knowledge for discovering improved latent representations, we study nonparametric Bayesian inference with regularization on the desired posterior distributions. While priors can indirectly affect posterior distributions through Bayes’ theorem, imposing posterior regularization is...
Journal
Journal title: Neurocomputing
Year: 2016
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2015.12.092